
Capturing Frame-Like Object Descriptors in Human Augmented Mapping / Faridghasemnia, M.; Vanzo, A.; Nardi, D. - 11946:(2019), pp. 392-404. (Paper presented at the 18th International Conference of the Italian Association for Artificial Intelligence, AI*IA 2019, held in Rende, Italy) [10.1007/978-3-030-35166-3_28].

Capturing Frame-Like Object Descriptors in Human Augmented Mapping

Faridghasemnia M.; Vanzo A.; Nardi D.
2019

Abstract

The model of an environment plays a crucial role for autonomous mobile robots, as it provides them with the task-relevant information they need. As robots become more intelligent, they require richer and more expressive environment models. Such a model is a map containing a structured description of the environment, which serves as the robot's knowledge for several tasks, such as planning and reasoning. In this work, we propose a framework that captures important environment descriptors, such as the functionality and ownership of the objects surrounding the robot, through verbal interaction. Specifically, we propose a corpus of verbal descriptions annotated with frame-like structures. We use the proposed dataset to train two multi-task neural architectures, which we compare through an experimental evaluation, discussing the design choices. Finally, we describe a simple interactive interface to our system, built on the trained model. The novelties of this work are: (i) the definition of a new problem, i.e., capturing different object descriptors, which is crucial for the accomplishment of the robot's tasks; (ii) a specialized corpus to support the creation of rich Semantic Maps; (iii) the design of different neural architectures and their experimental evaluation on the proposed dataset; (iv) a simple interface for the actual usage of the proposed resources.
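The abstract mentions frame-like annotations and multi-task neural architectures but gives no concrete example. Purely as an illustrative sketch, and not the authors' actual annotation scheme or model, the following Python snippet shows (a) one plausible frame-like annotation for a verbal object description, with ownership and functionality frames, and (b) a minimal multi-task tagger with a shared encoder and two prediction heads. All names, labels, and hyperparameters below are assumptions.

import torch
import torch.nn as nn

# Hypothetical frame-like annotation for a verbal description; the frame
# names and slots are illustrative, not the paper's actual scheme.
utterance = "This is my mug, I use it for drinking coffee"
annotation = {
    "tokens": utterance.split(),
    "frames": [
        {"frame": "Ownership",     "object": "mug", "owner": "speaker"},
        {"frame": "Functionality", "object": "mug", "function": "drinking coffee"},
    ],
}

class MultiTaskTagger(nn.Module):
    # Shared BiLSTM encoder with one head per task: an utterance-level
    # frame classifier and a token-level slot tagger.
    def __init__(self, vocab_size, n_frames, n_slots, emb_dim=64, hid_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.frame_head = nn.Linear(2 * hid_dim, n_frames)  # frame label per utterance
        self.slot_head = nn.Linear(2 * hid_dim, n_slots)    # slot label per token

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        h, _ = self.encoder(self.embed(token_ids))     # h: (batch, seq_len, 2*hid_dim)
        frame_logits = self.frame_head(h.mean(dim=1))  # mean-pooled utterance vector
        slot_logits = self.slot_head(h)                # per-token predictions
        return frame_logits, slot_logits

Training such a model would minimize the sum of the two task losses over the annotated corpus; the architectures actually evaluated in the paper may differ substantially from this sketch.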
18th International Conference of the Italian Association for Artificial Intelligence, AI*IA 2019
Corpus annotator; Human robot interaction; Natural Language understanding; Neural networks; Semantic mapping; Semantic mapping corpus
04 Conference proceedings publication :: 04b Conference paper in volume
Files attached to this product:
Faridghasemnia_Capturing_2019.pdf (370.5 kB, Adobe PDF; access restricted to archive managers — contact the author)
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1381965
Citations:
  • PMC: N/A
  • Scopus: 1
  • Web of Science: 0